Prof. Paul R. Baumann
Emeritus Professor of Geography
State University of New York
College at Oneonta
Oneonta, NY 13820

COPYRIGHT © 2022 Paul R. Baumann



The period from 1960 to 2000 brought about some major changes in the field of remote sensing.

1. The term "remote sensing" was introduced in 1960.

2. The primary platform used to carry remotely sensed instruments shifted from piloted planes to satellites.

3. Imagery became mainly digital in format rather than analog.

4. Sensors recorded the Earth's surface simultaneously in several different portions of the electromagnetic spectrum.

5. The turbulent social movements of the 1960s and 1970s brought new and continuing concern about changes in the Earth's environment. Remote sensing played, and continues to play, a key role in making people aware of what was, and still is, happening on the Earth's surface.


CORONA/Keyhole Program

 

From the beginning of the U-2 program it was recognized that within a short time the Soviet Union would find a way to bring down a U-2. Thus, in 1956, about the time of the first U-2 flight over the Soviet Union, the United States began developing a reconnaissance system based on orbiting satellites. This system later would be called CORONA. On August 19, 1960, while Gary Powers was on trial, the first successful mid-air catch was made of a capsule of exposed film ejected from a photographic reconnaissance satellite. The satellite had made seven passes over the denied territory and 17 orbits of the Earth. From this imagery 64 Soviet airfields and 26 new surface-to-air missile (SAM) sites were identified. Satellites were not covered by the denied-territory agreement. When the Soviets launched Sputnik I in 1957, they effectively established a worldwide "open skies" policy for items placed in orbit.

The diagram below shows how the satellite functioned. The film was stored in a supply cassette, shown in green on the diagram. When the appropriate command was given, the film advanced behind the stereo panoramic cameras, displayed in blue. Once a picture was taken, the film moved up onto the take-up cassette, shown in brown. At some point it would be determined that the take-up cassette should be released from the satellite and returned to Earth. The film cassette was enclosed in a capsule, called the "bucket." Because the satellite orbited at an altitude of 100 miles, the capsule had a heat shield to protect it during reentry. At about 60,000 feet, a large parachute deployed and the bucket gradually floated down, to be captured in mid-air, parachute and all, by a plane. This entire process had to be done during daytime and under certain weather conditions, so the release time of the capsule had to be precisely calculated. If the mid-air recovery failed, the capsule was designed to float in water for a short period of time and then sink. The diagram shows a second take-up cassette; once the first capsule was released, the second take-up cassette became active. The CORONA satellites used 31,500 ft (9,600 m) of special 70 mm film with a 24 inch (0.6 m) focal length lens.

Corona Camera System - Recovery of Return Capsule

 

The first CORONA reconnaissance photograph was taken on August 18, 1960. It shows the Mys Shmidta Airfield, U.S.S.R. Both the runway and the parking apron can be identified. In comparison to a U-2 photograph, the image lacks detail because the ground resolution was only 40 ft by 40 ft. The situation was rectified within a few years, however, with the KH-4 achieving a 5 ft by 5 ft resolution. (By this time the CORONA satellites had been redesignated as Keyhole (KH) satellites.) The KH-4 image below provides detailed information about the Soviet airfield at Mineralnye Vody. The quality of this photograph is as good as that of a U-2 photograph, which is rather remarkable since it was taken from roughly ten times the altitude of a U-2.

First CORONA Picture (Left) and the USSR's Mineralnye Vody Airfield in the mid-1960s

 

Although the first CORONA photograph does not appear to provide much information, the detection of the Mys Shmidta Airfield made the United States aware of the Soviet Union's military strategy for the Arctic. The airfield was developed in 1954 as part of a plan to create a ring of Soviet Air Force bases around the Arctic for use by the strategic bomber fleet during a war. Mys Shmidta was one of the airfields that formed this ring of forward staging bases inside the Arctic Circle. The use of forward staging bases was dictated by geography and weather: the parts of the Soviet Union closest to the United States are in the Arctic, with hostile weather conditions. Consequently, Soviet strategic bombers were normally stationed at bases in more temperate parts of the Soviet Union and flew training missions from the Arctic staging bases during warmer periods. Mys Shmidta is the easternmost of these airfields and lies a very short distance from Alaska. Thus, the United States maintains constant satellite coverage of the base even though Russia claims that it is not being used for any military purpose.

In 1967, President Lyndon Johnson, in a talk to a group of educators, responded to criticism that too much was being spent on space endeavors and not enough on the War on Poverty by saying:

"I wouldn't want to be quoted on this, but we've spent thirty-five or forty billion dollars on the space program. And if nothing else had come out of it except the knowledge we've gained from space photography, it would be worth ten times what the whole program has cost. Without satellites, I'd be operating by guess. But tonight we know how many missiles the enemy has, and it turned out our guesses were way off. We were doing things we didn't need to do. We were building things we didn't need to build. We were harboring fears we didn't need to harbor."

These remarks were based on imagery obtained from CORONA/KH satellites.

On February 24, 1995, Vice President Al Gore visited CIA Headquarters to announce Executive Order 12951. This order, signed by President Clinton, released CORONA/KH imagery to the public. At the announcement, Gore stated, "Satellite coverage gave us the confidence to pursue arms control agreements, agreements that eventually led to dramatic decreases in the number of nuclear weapons and their delivery systems." Gore also noted that the satellites "recorded much more than the landscape of the Cold War. In the process of acquiring this priceless data, we recorded for future generations the environmental history of the Earth at least a decade before any country on this Earth launched any Earth resource satellites." Gore had pushed Clinton to release the CORONA/KH imagery for environmental research; he later shared the 2007 Nobel Peace Prize for efforts to build up and disseminate greater knowledge about man-made climate change. The original film and technical mission-related documents are maintained by the National Archives and Records Administration (NARA). A duplicate set of the film is held by the USGS EROS Center and is used to produce digital copies of the imagery.

A good example of CORONA photography being used to record the history of an environmental event is the 1964 image of the Aral Sea. A comparison of an 1848/49 Russian survey map of the sea with the 1964 CORONA/KH image shows that the areal extent of the sea had not changed in the 115 years between 1848/49 and 1964. Using imagery from various satellites, the slide below records how the sea shrank over the 36 years between 1977 and 2013.

In the 1960s the former Soviet Union started diverting water for irrigation projects from rivers that previously flowed into the Aral Sea. This diversion caused the world's fourth largest lake, with an area of 26,300 sq mi, to begin shrinking. The 1964 CORONA/KH image provides a benchmark for when the shrinking started and links the cause of the shrinkage to the Soviet Union's agricultural practices. By 1999, the sea had declined to 40% of its original size, splitting into four lakes: the North Aral Sea, the eastern and western basins of the once far larger South Aral Sea, and the smaller intermediate Barsakelmes Lake. By 2013 the area of the Aral Sea had dropped by 92% from its 1964 size. A similar event has occurred in Africa with the shrinkage of Lake Chad, and again a CORONA/KH image provides a benchmark for when river water began to be diverted for irrigation instead of flowing into the lake.

Satellite Imagery of the Aral Sea, 1977-2013

 

The CORONA Program continued until 1972 with the last image taken on May 31, 1972. Over 800,000 images were acquired. There were 121 launches and 156 recoveries. It is these images that are available for the public to view and use. The CORONA Program gradually transitioned into the Keyhole Program.

The Keyhole series of satellites continues to orbit the Earth. Little is known about the current KH-13 or KH-14. A KH-12 is a $1 billion satellite that carries the equivalent of a Hubble Space Telescope, except that it looks at our planet. Several KH-12s are circling the planet. For security reasons, there are no published images from these spacecraft. They have an imaging resolution of 5-6 inches, meaning they can resolve objects about 5 inches or larger on the ground. Most likely they cannot read your house number, but they can detect you standing in your front yard.


Digital Technology

 

Computers have made our world digital in nature. Thus, a short history of the development of computer technology is needed in order to understand the change from analog to digital remote sensing. Without computers it would not be possible to handle the thousands of digitally based remote sensing images acquired each year. Computers have also offered new means of analyzing images, both statistically and graphically, that did not exist with analog technology.

Between the 1930s and 1950s the only computers available were found in major research laboratories and large universities with strong academic programs in certain fields of engineering, especially electrical engineering. These computers were one-of-a-kind machines that were not designed for a large commercial market. They were error-prone, very hard to use, and expensive to maintain.

By the 1960s large mainframe computers were becoming more common, built for large companies and the U.S. military and space programs. IBM became the unquestioned market leader in selling these large machines. Universities started acquiring mainframes and minicomputers for administrative functions and for various scientific research endeavors. (Minicomputers were small mainframes introduced in the 1960s for certain business enterprises and scientific applications and should not be confused with microcomputers.) These machines could be programmed in FORTRAN, a programming language suited to numeric computation and scientific computing. During the 1960s and 1970s only highly trained individuals could program them. The primary output devices for these machines were high-speed printers designed to print numeric and alphabetic characters; they were not meant to provide graphics.

IBM 1403 High Speed Printer (Left) and Digital Equipment Corporation minicomputer

 

By the late 1970s microcomputers had appeared on the market. Apple, Tandy, and IBM were the main manufacturers. Initially, these personal computers were viewed as toys to be used for entertainment and rudimentary calculations. At the same time, mainframes were being designed to handle networks of computer terminals.

Developing in parallel with computers and remote sensing was a new field, rooted in cartography, called geographic information systems (GIS). In the 1960s the Laboratory for Computer Graphics and Spatial Analysis was established at Harvard University. This Lab created a computer mapping software package called SYMAP, which used high-speed printers to produce maps. The map of Connecticut is a sample of a choroplethic type map created by SYMAP. The package was also able to produce isoplethic and proximal type maps. Crude in appearance by today's standards of computer mapping, SYMAP's programming provided the means for using high-speed printers to display remotely sensed images. The second sample of line printer output shows a very small portion of a remotely sensed satellite image. Overprinting alphanumeric characters provided extra grey-tone levels in the output. To visualize and interpret such output generally required standing several feet back from it. A small sketch of this grey-tone technique follows the sample images below.

SYMAP Line Printer Map (Left) and Satellite Line Printer Image

 
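The grey-tone idea translates readily into modern code. Below is a minimal Python sketch, assuming 0-255 brightness values; where SYMAP overprinted several characters in one print position to darken it, a single ramp of increasingly dense characters approximates the effect, and the function name is purely illustrative.

    grey_ramp = " .:-=+*#%@"    # ten print densities, light to dark

    def to_printer_line(values, max_value=255):
        # Map each brightness value to one character of the ramp.
        chars = []
        for v in values:
            level = v * (len(grey_ramp) - 1) // max_value
            chars.append(grey_ramp[level])
        return "".join(chars)

    print(to_printer_line([0, 64, 128, 192, 255]))   # prints " :=*@"

Printing such lines one scan line at a time yields the kind of character-mosaic image shown in the second sample above.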

In the mid-1970s a better output device for displaying images was developed: the Comtal Image Processing System. The next slide shows such a system. Not shown in the slide is the computer terminal that sat next to the monitor and was used to transmit commands to a minicomputer. The minicomputer calculated the desired results and displayed them on the Comtal monitor. A trained staff was needed to run the minicomputer. The entire system was costly to acquire and maintain, and few universities and research centers had the resources for it.

Comtal Image Processing System (Left) and Microcomputer Image Processing Workstation

 

By the mid-1980s the microcomputer was no longer a toy and could handle advanced processing. Apple and IBM had come out with new models of microcomputers. Apple released the first-generation Macintosh, the first mass-market computer to come with a graphical user interface and a mouse. The IBM microcomputer, with its motherboard design, allowed other companies to insert special boards into the computer. The second slide shows an IBM XT microcomputer and a special monitor linked to an image processing board inside the computer. Software was developed to run on the microcomputer and use functions available on the board to create images in real time. This system cost considerably less than the Comtal system and became the forerunner of today's microcomputer image processing workstations.

The Laboratory for Applications of Remote Sensing (LARS) at Purdue University was a major research center for the development of digital remote sensing from the mid-1960s to 2000. Purdue was one of the major engineering universities in the country, especially in the field of aerospace engineering. The University had its own fleet of planes, which made it possible to conduct aerial digital remote sensing. In addition, the University had strong ties with IBM, which provided the first mainframe in 1966: an IBM 360 Model 44, one of the first of this model delivered by IBM. It was on this machine that the software package known as LARSYS was developed. LARSYS was the first software system capable of processing multispectral image data. The system evolved over the years and, through copies given to other research labs, became the standard across the country for labs working in the field of digital remote sensing. Revisions of it formed the basis for early software systems at Pennsylvania State University, Texas A&M University, Oregon State University, the NASA/Johnson Space Center, and a number of other sites, and a portion of it was incorporated into the Jet Propulsion Lab's VICAR system, a system for manipulating image data that had developed out of early space probes sent to the moon and planets. The moon imagery relates back to Sherman Fairchild's moon camera, discussed in the previous module.

In 1970 the LARS computer facility was upgraded to an IBM 360 Model 67. This time-shared system, one of the larger processors available at the time, was used to establish a remote terminal network so that a potential user of the technology could try it out inexpensively before making a commitment to it. Upgraded several times, the system served as a major technology transfer tool. It also served as the primary processing capability for LARS and a number of other research labs. The map below shows the sites that had direct leased lines to LARS and its LARSYS software. Sites as far away as Australia used the system on a dial-up basis.

LARS Remote Terminal Network

 

NASA's Earth Resources Laboratory (ERL), located at the time in Slidell, Louisiana, obtained the LARSYS software and modified and updated it to work on minicomputers with the previously discussed Comtal image processing systems. This move from a large mainframe environment to a minicomputer system gave state government agencies and other large universities direct access to digital remote sensing. ERL offered mini-courses on remote sensing using its software and minicomputer platform and helped states and universities obtain the software and hardware. Eventually microcomputers were able to handle advanced processing, and digital remote sensing migrated to that platform. Minicomputers, in general, are no longer produced.


Landsat

 

In 1965, the director of the U.S. Geological Survey (USGS), William Pecora, proposed the idea of a remote sensing satellite program to gather facts about the natural resources of our planet. Pecora stated that the program was "conceived largely as a direct result of the demonstrated utility of the Mercury and Gemini orbital photography to Earth resource studies." While weather satellites had been monitoring the Earth's atmosphere since 1960 and were largely considered useful, there was little appreciation of observing the Earth's land surface from space until the mid-1960s. When the idea of an Earth resources satellite was first proposed, it met with intense opposition from the Bureau of the Budget and from those who argued that high-altitude aircraft would be the fiscally responsible choice for Earth remote sensing. Concurrently, the Department of Defense feared that a civilian satellite program would compromise the secrecy of its reconnaissance missions, and geopolitical concerns existed about photographing foreign countries without their permission. In 1965, NASA began methodical investigations of the Earth through remote sensing using instruments mounted on planes. In 1966, the USGS convinced the Secretary of the Interior, Stewart L. Udall, to announce that the Department of the Interior (DOI) was going to proceed with its own Earth-observing satellite program. This savvy political stunt pressured NASA to expedite the building of an Earth resources satellite. But budgetary constraints and sensor disagreements between application agencies (notably the Department of Agriculture and DOI) again stymied the satellite construction process. Finally, by 1970 NASA had a green light to build a satellite. Remarkably, within only two years, the satellite was launched, heralding a new age of remote sensing of land from space.

ERTS (Artist's View) and ERTS being assembled.

 

Launched on July 23, 1972, the satellite was known initially as the Earth Resources Technology Satellite (ERTS). It was the first Earth-observing satellite launched with the express intent to study and monitor our planet's landmasses. To perform the monitoring, ERTS carried two remote sensing instruments: a camera system built by the Radio Corporation of America (RCA) called the Return Beam Vidicon (RBV), and the Multispectral Scanner (MSS) built by the Hughes Aircraft Company. The RBV was supposed to be the primary instrument, but it became the source of an electrical transient that caused the satellite to briefly lose attitude control. It became necessary to shut down the RBV in order to maintain the operation of the satellite, and only a small number of images were recorded with it.

RBV Camera System (Left) and MSS Scanning Device

 

The MSS was flown as the secondary, highly experimental instrument. "But once we looked at the data, the roles switched," relates Stan Freden, ERTS Project Scientist. In the foreword of the U.S. Geological Survey's ERTS-1, A New Window on Our Planet, published in 1976, then-director of the USGS, Dr. V. E. McKelvey, wrote: "The ERTS spacecraft represents the first step in merging space and remote-sensing technologies into a system for inventorying and managing the Earth's resources." ERTS operated until January 1978, outliving its design life by five years. The MSS acquired over 300,000 images providing repeated coverage of the Earth's land surfaces. The quality and impact of the resulting information exceeded all expectations. ERTS was renamed Landsat, the name the program carries to this day.

The RBV camera system was designed to obtain high resolution television pictures of the Earth. Three cameras took pictures simultaneously in three different regions of the electromagnetic spectrum. The cameras were similar except for the spectral filters contained in the lens assemblies, which provided separate spectral pictures. Camera 1 covered the visible blue-green portion of the spectrum; Camera 2 the visible orange-red portion; and Camera 3 the red to near-infrared portion. They were designated Bands 1-3. The three RBV cameras on Landsats 1 and 2 were aligned to view the same square ground area. The three Earth-oriented cameras were mounted on a common base, structurally isolated from the spacecraft to maintain accurate alignment. When the cameras were shuttered, the pictures were stored on the RBV photosensitive surfaces and then scanned to produce video signal output. Video data from the RBV were transmitted in both real-time and tape-recorder modes. On Landsat 3, two RBV cameras were used, providing side-by-side images. Each camera operated independently, allowing single-frame or continuous coverage, and each had the same broad spectral range (yellow to near infrared) of 505 to 750 nanometers. These changes were made to provide increased ground resolution for studying the Earth's land surface.

Since the RBV camera system failed to perform successfully on Landsats 1-3, it was not used on later Landsats. The failure of the RBV camera system and the success of the MSS system represent a major shift in remote sensing: the change from analog pictures to digital images.

MSS bands of pivot-irrigation fields in western Oregon, near the Columbia River.

 

The MSS was a line scanning device that observed the Earth perpendicular to the orbital track of the satellite. The cross-track scanning was accomplished by an oscillating mirror; six lines were scanned simultaneously in each of the four spectral bands for each mirror sweep, and the forward motion of the satellite provided the along-track scan line progression. The first five Landsats carried the MSS sensor, which responded to Earth-reflected sunlight in four spectral bands. On Landsats 1-3 these bands were numbered 4-7, since the three RBV bands were 1-3; on Landsats 4-5 the MSS bands were identified as 1-4. Landsat 3 carried an MSS sensor with an additional band, designated band 8, that responded to thermal (heat) infrared radiation.

An MSS scene had an Instantaneous Field Of View (IFOV) of 68 meters in the cross-track direction by 83 meters in the along-track direction (223.0 by 272.3 feet, respectively). Thus, a single, rectangular ground point was 68 by 83 meters in size, slightly larger than one acre of land area. A scan line was generally 185 km (115 mi.) in length, or 2,722 ground points. A single image normally covered a ground area of 185 km (115 mi.) by 185 km (115 mi.) and recorded over 7.4 million ground points. The MSS recorded four data values for each ground point, one for each of its four portions of the electromagnetic spectrum; consequently, each MSS data set held over 29.6 million pieces of data (7.4 million x 4). Landsat 1 operated until January 1978, outliving its design life by five years. As previously indicated, the Landsat 1 MSS system alone acquired over 300,000 images providing repeated coverage of the Earth's land surfaces; Landsats 2 and 3 had similar totals. The quality and impact of the resulting information exceeded all expectations. To handle this many images (data sets) the power of the computer was needed.
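As a rough check, the arithmetic behind these totals can be sketched in a few lines of Python. The variable names are illustrative, and the scene is treated as roughly 2,720 ground points on a side, which is what the quoted figures imply:

    swath_m = 185_000        # scan line length: 185 km (115 mi.)
    ifov_cross_m = 68        # cross-track IFOV: 68 m
    bands = 4                # four MSS spectral bands

    points_per_line = swath_m / ifov_cross_m     # ~2,720 ground points
    points_per_scene = points_per_line ** 2      # scene is 185 km on a side
    values_per_scene = points_per_scene * bands

    print(round(points_per_line))     # 2721, the text's "2,722 ground points"
    print(round(points_per_scene))    # ~7.4 million ground points
    print(round(values_per_scene))    # ~29.6 million data values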

Landsat Orbit Configuration

 

Landsats 1-3 orbited at an altitude of 570 miles (923 km). The orbits were near-polar (inclined 9.09° from a longitudinal line) and Sun-synchronous (crossing the equator between 9:30 and 10:00 AM local time on every pass), making 14 passes each day in descending mode (moving southward on the daylight side of the Earth), with a complete orbital circuit taking about 103 minutes. After any given orbit, the spacecraft's next orbit was some 1,775 miles (2,875 km) to the west. On orbit 15, the next day, the spacecraft was 98 miles (159 km) westward of orbit 1 at the equator. After 252 orbits, or 18 days, the spacecraft flew over the same path it had on the first day. This orbit configuration changed slightly with Landsats 4-5.
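These figures fit together arithmetically. In the rough Python sketch below, the Earth's circumference and sidereal rotation period are standard reference values, not figures from the text:

    earth_equator_km = 40_075    # Earth's equatorial circumference
    sidereal_day_min = 1_436     # Earth's rotation period, 23 h 56 min
    orbit_period_min = 103       # one orbital circuit, from the text

    # The Earth rotates eastward beneath the satellite during each circuit,
    # so successive ground tracks step westward by this amount at the equator.
    shift_per_orbit_km = earth_equator_km * orbit_period_min / sidereal_day_min
    print(round(shift_per_orbit_km))     # 2874 -- the quoted ~2,875 km

    # Fourteen orbits per day for 18 days brings the satellite back over
    # its first-day ground track.
    print(14 * 18)                       # 252 orbits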

Landsat 4 was launched on July 16, 1982. It was significantly different from the previous Landsats. First, it did not carry the RBV instrument. Second, in addition to the MSS sensor, Landsat 4 carried a new scanning sensor known as the Thematic Mapper (TM). The TM instrument had seven spectral bands, collecting data from the blue, green, red, near-infrared, mid-infrared (two bands), and thermal infrared portions of the electromagnetic spectrum. Except for Band 6, a TM image had an Instantaneous Field Of View (IFOV) of 30m x 30m, roughly one-fourth to one-fifth of an acre; Band 6 had an IFOV of 120m x 120m on the ground. Methods were developed to make Band 6 data correspond geometrically to the 30m x 30m spatial resolution of the other bands. Third, data were downloaded to the Tracking and Data Relay Satellite System (TDRSS), which could relay the information to ground stations. With Landsats 1-3, MSS data were stored on onboard recorders and downloaded when the spacecraft crossed a ground station, and the recorders failed after a certain amount of time. Landsat 4 had several operational problems and its use was limited. It was finally decommissioned in 2001.
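One simple way to picture the Band 6 correspondence is nearest-neighbor resampling, in which each 120m thermal pixel is replicated over a 4 x 4 block of 30m cells. The short Python sketch below illustrates the idea only; it is not the actual production method, and the sample values are invented:

    import numpy as np

    # A hypothetical 2 x 2 patch of Band 6 values on the 120 m grid.
    band6_120m = np.array([[301.5, 298.2],
                           [299.7, 302.4]])

    # Replicate each thermal pixel over a 4 x 4 block so the data register
    # cell-for-cell with the 30 m bands covering the same ground area.
    band6_30m = np.kron(band6_120m, np.ones((4, 4)))

    print(band6_30m.shape)    # (8, 8): the same patch on a 30 m grid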

Landsat 4 and 5

Below is a TM data set covering Charleston, South Carolina. The first three bands, shown on the top row, display the three visible portions of the electromagnetic spectrum (blue, green, and red). Band 4 in the middle row covers a section of the near-infrared portion of the spectrum, Band 5 a mid-infrared section, and Band 6 the thermal infrared. Band 7 relates to a second section of the mid-infrared range. These bands can be combined to form color composite images. The middle image on the bottom row is a true color composite, sometimes referred to as a natural color composite, created from the three visible bands. The last image on the bottom row is a false color composite, based on the near-infrared band combined with two of the visible bands. Numerous combinations can be developed, with each combination potentially bringing forth new insights about the area being studied. A short sketch of how such composites are assembled digitally follows the figure.

Thematic Mapper Bands - Charleston, South Carolina

 
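In digital terms, a composite is simply three bands stacked into the red, green, and blue channels of a display. The minimal Python sketch below illustrates the band assignments; the arrays are random stand-ins for real TM data:

    import numpy as np

    rows, cols = 512, 512
    # Random stand-ins for TM bands 1-4, scaled to 0-255 brightness values.
    band1, band2, band3, band4 = (
        np.random.randint(0, 256, (rows, cols), dtype=np.uint8) for _ in range(4)
    )

    # True (natural) color: red, green, and blue bands in the R, G, B channels.
    true_color = np.dstack([band3, band2, band1])

    # A standard false color composite: near infrared displayed as red, so
    # healthy vegetation, a strong infrared reflector, appears bright red.
    false_color = np.dstack([band4, band3, band2])

    print(true_color.shape, false_color.shape)    # (512, 512, 3) each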

On March 1, 1984, NASA launched Landsat 5. Landsat 5 was designed and built at the same time as Landsat 4 and carried the same payload: the MSS and TM sensors. Although it also had some operational problems, it remained in use for nearly 29 years and was the workhorse of the Earth resource satellites, orbiting the planet over 150,000 times while acquiring over 2.9 million images of land surface conditions around the world. Routine TM imaging ended in November 2011, and the satellite was decommissioned in 2013. It was built by Fairchild Industries.

Initially, all imagery from the Landsats was basically free to anyone in any country in the world; environmental issues do not always begin and stop at political boundaries. However, in 1979, Presidential Directive 54 under President Jimmy Carter transferred Landsat operations from NASA to NOAA. The directive also recommended the development of a long term operational system with four additional satellites beyond Landsat 3, and recommended the transition of the program to the private sector. This occurred in 1985 when the Earth Observation Satellite Company (EOSAT), a partnership of Hughes Aircraft and RCA, was selected by NOAA to operate the Landsat system under a ten year contract. EOSAT operated Landsats 4 and 5, had exclusive rights to market Landsat data, and was to build Landsats 6 and 7. Under privatization Landsat data were no longer free. The price of an image, for lack of a more appropriate word, "skyrocketed" well beyond what the normal user could afford. Given the large outcry over the outcome of privatization, Congress passed the Land Remote Sensing Policy Act of 1992, which returned the Landsat Program to the Federal Government and placed the distribution of Landsat imagery with the U.S. Geological Survey (USGS). Distribution is handled by the USGS Earth Resources Observation Systems (EROS) Data Center, which receives, processes, and distributes data from NASA Landsat satellites, as well as aerial photographs gathered for the USGS and other agencies. Again, the imagery is basically free and can be downloaded to anyone's personal computer.

Launched in 1999, Landsat 7 was the last of the Landsats of the 20th Century. With the beginning of the 21st Century, new Landsats are orbiting the planet, recording the mosaic of environments that make up the face of the Earth. This instructional module concludes this history of remote sensing.